
    The power of the feed-forward sweep

    Vision is fast and efficient. A novel natural scene can be categorized (e.g. does it contain an animal, a vehicle?) by human observers in less than 150 ms, and with minimal attentional resources. This ability still holds under strong backward masking conditions. In fact, with a stimulus onset asynchrony of about 30 ms (the time between the scene and mask onset), the first 30 ms of selective behavioral responses are essentially unaffected by the presence of the mask, suggesting that this type of “ultra-rapid” processing can rely on a sequence of swift feed-forward stages, in which the mask information never “catches up” with the scene information. Simulations show that the feed-forward propagation of the first wave of spikes generated at stimulus onset may indeed suffice for crude recognition or categorization. Scene awareness, however, may take significantly more time to develop, and probably requires feed-back processes. The main implication of these results for theories of masking is that pattern or metacontrast (backward) masking does not appear to bar the progression of visual information at a low level. These ideas bear interesting similarities to existing conceptualizations of priming and masking, such as Direct Parameter Specification or the Rapid Chase theory.
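
    To make the “never catches up” logic concrete, here is a minimal sketch (not the simulations cited in the abstract) of first-spike-wave timing in a purely feed-forward cascade. The per-stage latencies are illustrative assumptions, not values from the paper; the point is that the mask's first wave trails the scene's by exactly the SOA at every stage.

```python
# Minimal sketch, assuming illustrative per-stage latencies (not the paper's model).
STAGE_LATENCIES_MS = [40, 15, 15, 20, 25]   # e.g. retina -> LGN -> V1 -> V4 -> IT (assumed)
SOA_MS = 30                                 # scene-to-mask stimulus onset asynchrony

def first_wave_arrivals(onset_ms, latencies):
    """Arrival time of the first spike wave at each successive stage."""
    arrivals, t = [], onset_ms
    for latency in latencies:
        t += latency
        arrivals.append(t)
    return arrivals

scene_wave = first_wave_arrivals(0, STAGE_LATENCIES_MS)
mask_wave = first_wave_arrivals(SOA_MS, STAGE_LATENCIES_MS)
for stage, (s, m) in enumerate(zip(scene_wave, mask_wave), start=1):
    # In a purely feed-forward cascade the mask trails the scene by the SOA
    # at every stage, so the first wave is never overwritten "from behind".
    print(f"stage {stage}: scene at {s} ms, mask at {m} ms, lead = {m - s} ms")
```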

    Dynamics of trimming the content of face representations for categorization in the brain

    To understand visual cognition, it is imperative to determine when, how and with what information the human brain categorizes the visual input. Visual categorization consistently involves at least an early and a late stage: the occipito-temporal N170 event-related potential, related to stimulus encoding, and the parietal P300, involved in perceptual decisions. Here we sought to understand how the brain globally transforms its representations of face categories from their early encoding to the later decision stage over the 400 ms time window encompassing the N170 and P300 brain events. We applied classification image techniques to the behavioral and electroencephalographic data of three observers who categorized seven facial expressions of emotion, and we report two main findings: (1) over the 400 ms time course, processing of facial features initially spreads bilaterally across the left and right occipito-temporal regions before dynamically converging onto the centro-parietal region; (2) concurrently, information processing gradually shifts from encoding common face features across all spatial scales (e.g. the eyes) to representing only the finer scales of the diagnostic features that are richer in useful information for behavior (e.g. the wide-open eyes in 'fear'; the detailed mouth in 'happy'). Our findings suggest that the brain refines its diagnostic representations of visual categories over the first 400 ms of processing by trimming the initially thorough encoding of features over the N170 down to only the detailed information important for perceptual decisions over the P300.
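
    The core of the classification image technique is reverse correlation: average the stimulus information sampled on each trial, grouped by outcome, and take the difference. The sketch below shows that generic logic only; the authors' actual pipeline also conditions on single-trial EEG amplitudes across the N170-P300 window, which this toy version omits.

```python
import numpy as np

def classification_image(samples, responses):
    """Generic reverse-correlation classification image: difference between
    the average information sampled on trials grouped by outcome.
    samples: (n_trials, H, W) per-trial sampling masks (e.g. Bubbles).
    responses: (n_trials,) boolean outcome (e.g. correct vs. incorrect)."""
    samples = np.asarray(samples, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    ci = samples[responses].mean(axis=0) - samples[~responses].mean(axis=0)
    return (ci - ci.mean()) / ci.std()   # z-score to locate diagnostic regions

# Demo on random data: 500 trials of 64x64 sampling masks.
rng = np.random.default_rng(0)
masks = rng.random((500, 64, 64))
correct = rng.random(500) > 0.3
z_map = classification_image(masks, correct)
```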

    Parametric study of EEG sensitivity to phase noise during face processing

    Background: The present paper examines the visual processing speed of complex objects, here faces, by mapping the relationship between object physical properties and single-trial brain responses. Measuring visual processing speed is challenging because uncontrolled physical differences that co-vary with object categories might affect brain measurements, thus biasing our speed estimates. Recently, we demonstrated that early event-related potential (ERP) differences between faces and objects are preserved even when images differ only in phase information and amplitude spectra are equated across image categories. Here, we use a parametric design to study how early ERPs to faces are shaped by phase information. Subjects performed a two-alternative forced-choice discrimination between two faces (Experiment 1) or textures (two control experiments). All stimuli had the same amplitude spectrum and were presented at 11 phase noise levels, varying from 0% to 100% in 10% increments, using a linear phase interpolation technique. Single-trial ERP data from each subject were analysed using a multiple linear regression model. Results: Our results show that sensitivity to phase noise in faces emerges progressively in a short time window between the P1 and the N170 ERP visual components. The sensitivity to phase noise starts at about 120–130 ms after stimulus onset and continues for another 25–40 ms. This result was robust both within and across subjects. A control experiment using pink noise textures, which had the same second-order statistics as the faces used in Experiment 1, demonstrated that the sensitivity to phase noise observed for faces cannot be explained by the presence of global image structure alone. A second control experiment used wavelet textures that were matched to the face stimuli in terms of second- and higher-order image statistics. Results from this experiment suggest that higher-order statistics of faces are necessary but not sufficient to obtain the sensitivity to phase noise function observed in response to faces. Conclusion: Our results constitute the first quantitative assessment of the time course of phase information processing by the human visual brain. We interpret our results in a framework that focuses on image statistics and single-trial analyses.
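
    Phase-noise stimuli of this kind are built by holding the amplitude spectrum fixed while mixing the image's phase spectrum with random phase. The sketch below illustrates the general idea; it uses a naive linear interpolation of phase angles that ignores phase circularity, whereas published implementations handle wrap-around more carefully, so treat it as an approximation of the technique rather than the authors' exact code.

```python
import numpy as np

def add_phase_noise(image, noise_level, rng=None):
    """Keep the amplitude spectrum fixed and interpolate the phase spectrum
    toward random phase. noise_level: 0.0 (intact) to 1.0 (fully scrambled).
    Approximation only: linear mixing of angles ignores circularity."""
    rng = np.random.default_rng() if rng is None else rng
    f = np.fft.fft2(image)
    amplitude, phase = np.abs(f), np.angle(f)
    random_phase = rng.uniform(-np.pi, np.pi, size=phase.shape)
    mixed_phase = (1.0 - noise_level) * phase + noise_level * random_phase
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * mixed_phase)))

# 11 noise levels, 0% to 100% in 10% steps, as in the experiments.
levels = np.linspace(0.0, 1.0, 11)
face = np.random.default_rng(1).random((256, 256))   # stand-in for a face image
stimuli = [add_phase_noise(face, p) for p in levels]
```

    Single-trial ERP amplitudes at each electrode and time point can then be regressed on noise level (plus an intercept and nuisance regressors), which is the spirit of the multiple linear regression model described above.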

    The role of spatial frequency information for ERP components sensitive to faces and emotional facial expression

    To investigate the impact of spatial frequency on emotional facial expression analysis, ERPs were recorded in response to low spatial frequency (LSF), high spatial frequency (HSF), and unfiltered broad spatial frequency (BSF) faces with fearful or neutral expressions, houses, and chairs. In line with previous findings, BSF fearful facial expressions elicited a greater frontal positivity than BSF neutral facial expressions, starting at about 150 ms after stimulus onset. In contrast, this emotional expression effect was absent for HSF and LSF faces. Given that some brain regions involved in emotion processing, such as the amygdala and connected structures, are selectively tuned to LSF visual inputs, these data suggest that ERP effects of emotional facial expression do not directly reflect activity in these regions. It is argued that higher-order neocortical brain systems are involved in the generation of emotion-specific waveform modulations. The face-sensitive N170 component was affected neither by emotional facial expression nor by spatial frequency information.
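
    LSF and HSF stimuli of this kind are typically produced by low- and high-pass filtering in the Fourier domain. Below is an illustrative sketch using Gaussian filters; the cutoff values are assumptions for demonstration, not the cutoffs used in the study.

```python
import numpy as np

def sf_filter(image, cutoff_cpi, band="low"):
    """Gaussian low-pass or high-pass filter in the Fourier domain.
    cutoff_cpi is in cycles per image; values below are illustrative."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h            # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w            # horizontal frequency, cycles/image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    lowpass_gain = np.exp(-(radius / cutoff_cpi) ** 2)
    gain = lowpass_gain if band == "low" else 1.0 - lowpass_gain
    return np.real(np.fft.ifft2(np.fft.fft2(image) * gain))

face = np.random.default_rng(2).random((256, 256))      # stand-in for a face image
lsf_face = sf_filter(face, cutoff_cpi=8, band="low")    # coarse, LSF version
hsf_face = sf_filter(face, cutoff_cpi=24, band="high")  # fine-detail, HSF version
```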

    Emotion based attentional priority for storage in visual short-term memory

    A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (this is known as ‘emotional superiority’). However, there is debate as to whether competition for processing resources results in emotional superiority per se, or more specifically, threat superiority. Therefore, to investigate prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli. Participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure duration) and low (400 ms array exposure duration) perceptual processing competition. For the high competition condition (i.e. 150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli), whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e. maximal competition), only threat superiority emerged. These findings demonstrate attentional prioritisation for storage in VSTM for emotional faces. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection; they not only combine the two strands of research (i.e. visual selection and emotion), but also highlight that a critical factor in the processing of emotional stimuli is the availability of processing resources, which is further constrained by task demands.

    Does the Reading of Different Orthographies Produce Distinct Brain Activity Patterns? An ERP Study

    Orthographies vary in the degree of transparency of spelling-sound correspondence. These range from shallow orthographies with transparent grapheme-phoneme relations, to deep orthographies, in which these relations are opaque. Only a few studies have examined whether orthographic depth is reflected in brain activity. In these studies a between-language design was applied, making it difficult to isolate the effect of orthographic depth. In the present work this question was examined using a within-subject, within-language design. The participants were speakers of Hebrew, as they are skilled in reading two forms of script transcribing the same oral language. One form is the shallow pointed script (with diacritics), and the other is the deep unpointed script (without diacritics). Event-related potentials (ERPs) were recorded while skilled readers carried out a lexical decision task in the two forms of script. A visual non-orthographic task controlled for the visual difference between the scripts (resulting from the addition of diacritics to the pointed script only). At an early visual-perceptual stage of processing (∼165 ms after target onset), the pointed script evoked larger amplitudes with longer latencies than the unpointed script at occipital-temporal sites. However, these effects were not restricted to orthographic processing, and may therefore have reflected, at least in part, the visual load imposed by the diacritics. Nevertheless, the results implied that distinct orthographic processing may have also contributed to these effects. At later stages (∼340 ms after target onset) the unpointed script elicited larger amplitudes than the pointed one, with earlier latencies. As this latency has been linked to orthographic-linguistic processing and to the classification of stimuli, it is suggested that these differences are associated with distinct lexical processing of a shallow and a deep orthography.

    Diacritics improve comprehension of the Arabic script by providing access to the meanings of heterophonic homographs

    The diacritical markers that represent most of the vowels in the Arabic orthography are generally omitted from written texts. Previous research revealed that the absence of diacritics reduces reading comprehension performance even for skilled readers of Arabic. One possible explanation is that many Arabic words become ambiguous when diacritics are missing. Words of this kind are known as heterophonic homographs and are associated with at least two different pronunciations and meanings when written without diacritics. The aim of the two experiments reported in this study was to investigate whether the presence of diacritics improves the comprehension of all written words, or whether the effects are confined to heterophonic homographs. In Experiment 1, adult readers of Arabic were asked to decide whether written words had a living meaning. The materials included heterophonic homographs that had one living and one non-living meaning. Results showed that diacritics significantly increased the accuracy of semantic decisions about ambiguous words but had no effect on the accuracy of decisions about unambiguous words. Consistent results were observed in Experiment 2, where the materials comprised sentences rather than single words. Overall, the findings suggest that diacritics improve the comprehension of heterophonic homographs by facilitating access to semantic representations that would otherwise be difficult to access from print.

    Attentional Prioritization of Infant Faces Is Limited to Own-Race Infants

    Background: Recent evidence indicates that infant faces capture attention automatically, presumably to elicit caregiving behavior from adults, leading to a greater probability of progeny survival. Elsewhere, evidence demonstrates that people show deficiencies in the processing of other-race relative to own-race faces. We ask whether this other-race effect impacts on attentional attraction to infant faces. Using a dot-probe task to reveal the spatial allocation of attention, we investigate whether other-race infants capture attention. Principal Findings: South Asian and White participants (young adults aged 18–23 years) responded to a probe shape appearing in a location previously occupied by either an infant face or an adult face; across trials, the race (South Asian/White) of the faces was manipulated. Results indicated that participants were faster to respond to probes that appeared in the same location as infant faces than adult faces, but only on own-race trials. Conclusions/Significance: Own-race infant faces attract attention, but other-race infant faces do not. Sensitivity to face-specific care-seeking cues in the other-race Kindchenschema may be constrained by interracial contact and experience.
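
    In a dot-probe design, attentional capture is typically scored as the RT advantage for probes appearing at the attended location. The sketch below shows that generic scoring logic; the variable names and numbers are illustrative, not the study's data.

```python
import numpy as np

def dot_probe_bias_ms(rt_probe_at_infant, rt_probe_at_adult):
    """Attentional bias score: mean RT when the probe replaces the adult face
    minus mean RT when it replaces the infant face. Positive values indicate
    attention was captured by the infant face. (Generic scoring sketch.)"""
    return float(np.mean(rt_probe_at_adult) - np.mean(rt_probe_at_infant))

# Computed separately per race condition; the RTs below are made up.
own_race_bias = dot_probe_bias_ms([412, 398, 430, 405], [441, 452, 438, 460])
other_race_bias = dot_probe_bias_ms([428, 433, 425, 431], [430, 427, 435, 429])
print(own_race_bias, other_race_bias)   # a positive bias only on own-race trials
```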

    Mind Perception: Real but Not Artificial Faces Sustain Neural Activity beyond the N170/VPP

    Faces are visual objects that hold special significance as the icons of other minds. Previous researchers using event-related potentials (ERPs) have found that faces are uniquely associated with an increased N170/vertex positive potential (VPP) and a more sustained frontal positivity. Here, we examined the processing of faces as objects vs. faces as cues to minds by contrasting images of faces possessing minds (human faces), faces lacking minds (doll faces), and non-face objects (i.e., clocks). Although both doll and human faces were associated with an increased N170/VPP from 175–200 ms following stimulus onset, only human faces were associated with a sustained positivity beyond 400 ms. Our data suggest that the N170/VPP reflects the object-based processing of faces, whether of dolls or humans; the later positivity, on the other hand, appears to uniquely index the processing of human faces, which are more salient and convey information about identity and the presence of other minds.